Random Optimization Methods
Abstract
Pure random search, adaptive random search, simulated annealing, genetic algorithms, evolution strategies, nonparametric estimation methods, bandit problems, simulation optimization, clustering methods, probabilistic automata and random restart. These are a sample of the random search methods that are reviewed in this paper. The discussion focuses on computational issues as well as behavior in difficult optimization problems.

Books. This survey starts with the mention of a few good books on the subject matter. These include Zhigljavsky (1991), Torn and Zilinskas (1989), Aarts and Korst (1989), Van Laarhoven and Aarts (1987), Holland (1975), Schwefel (1981), Schwefel and Manner (1991), Ackley (1987), Goldberg (1989), Ermoliev and Wets (1988), and Wasan (1969).

Advantages of random search. Below are some reasons why you may wish to look at random search algorithms more closely.

A. Ease of programming. Simple, easily understood programs that can be implemented on nearly any computer (see the pure-random-search sketch after this list).

B. Inexpensive realization. Many of the methods require very simple storage and comparison facilities. There is virtually no overhead, so that the cost of a run is borne almost entirely by the number of function evaluations.

C. Insensitivity to the criterion function. Convergence of most random search procedures can be guaranteed for any function, regardless of its smoothness properties or its multimodality. The particular shape of a function (granularity, discontinuity, presence of holes or plateaus) has virtually no effect on most random search procedures.

D. Efficiency. In deterministic schemes, a lot of computer time is spent in deciding where the next probe point should be. In random search procedures, this time is saved, at the expense perhaps of a few more probes. According to the minimax criterion, random search is more efficient than any deterministic search. This says that random search is best in the worst possible circumstances. Jarvis (1975) in his survey claims that it is "one of the best methods in the worst situation possible (granularity, plateaus, holes, discontinuities, high dimensionality, multimodality) and perhaps the worst method in the best situation (smoothness, continuity, low dimensionality, unimodality)".

E. Flexibility. Random methods fill the entire gap between pure random search (which totally ignores any previously obtained information) and deterministic methods. In fact, many are geared towards efficient combinations of methods.

F. Information extraction. During the optimization process, the information gathered can be used to guide the search; this is especially useful when global information about the shape of the function has to be extracted.

G. Easily parallelizable. Many random search procedures either totally ignore past information, or proceed with a number of simultaneous searches or moving clouds of points, with only an occasional need for communication between the various components. This lends itself superbly to parallelization (see the parallel sketch below).

H. Insensitivity to noise. Function evaluations that are perturbed by noise affect the performance of random search algorithms much less than that of deterministic algorithms. Random search is also ideally suited for multimodal stochastic optimization problems.

I. A simple startpoint selection method. Pure random search and some of its variants can be used as a method for the selection of a suitable startpoint of a local search algorithm (see the random-restart sketch below). It is ...
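To make advantage A concrete, here is a minimal sketch of pure random search in Python, assuming a box-constrained minimization problem; the function, bounds, and parameter names are illustrative and not taken from the paper. Each probe is drawn uniformly over the box and ignores all earlier probes, which is what makes the method "pure".

import math, random

def pure_random_search(f, bounds, n_probes=1000, seed=0):
    # Minimize f over a box by uniform, memoryless random probing.
    # bounds is a list of (low, high) pairs, one per dimension.
    rng = random.Random(seed)
    best_x, best_val = None, float("inf")
    for _ in range(n_probes):
        # Each probe ignores all previously gathered information.
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        val = f(x)
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

# Usage on a multimodal test function (Rastrigin); its many local
# minima have no effect on the procedure (advantage C).
def rastrigin(x):
    return 10 * len(x) + sum(v * v - 10 * math.cos(2 * math.pi * v) for v in x)

best_x, best_val = pure_random_search(rastrigin, [(-5.12, 5.12)] * 2, n_probes=5000)

Since only the best point seen so far is stored, the memory cost is constant, which illustrates advantage B as well.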
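Advantage G can be illustrated with a sketch along the same lines: the probes are mutually independent, so batches of evaluations can run concurrently, with a single reduction step at the end. This is an assumed implementation, not from the paper; a thread pool is used for brevity (a process pool would be the usual choice for CPU-bound functions).

import random
from concurrent.futures import ThreadPoolExecutor

def parallel_random_search(f, bounds, n_probes=1000, n_workers=4, seed=0):
    rng = random.Random(seed)
    # Draw all probes up front; they do not depend on one another.
    points = [[rng.uniform(lo, hi) for lo, hi in bounds]
              for _ in range(n_probes)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        values = list(pool.map(f, points))  # evaluations run concurrently
    # The only communication step: reduce to the best probe found.
    best_val, best_x = min(zip(values, points), key=lambda pair: pair[0])
    return best_x, best_val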
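Advantage I, using random points as startpoints for a local search (random restart), might look like the following sketch. The step-halving pattern search below is only a stand-in for any local optimizer; all names and parameter choices are illustrative assumptions.

import random

def local_descent(f, x0, step=0.5, shrink=0.5, tol=1e-6, max_iters=10000):
    # Crude pattern search: try +/- step along each axis, halve the
    # step when no move improves, stop once the step is below tol.
    x, fx = list(x0), f(x0)
    for _ in range(max_iters):
        if step <= tol:
            break
        improved = False
        for i in range(len(x)):
            for delta in (step, -step):
                y = list(x)
                y[i] += delta
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
        if not improved:
            step *= shrink
    return x, fx

def random_restart(f, bounds, n_restarts=20, seed=0):
    # Pure random search supplies the startpoints; the local search
    # refines each one, and the best local minimum found is kept.
    rng = random.Random(seed)
    best_x, best_val = None, float("inf")
    for _ in range(n_restarts):
        x0 = [rng.uniform(lo, hi) for lo, hi in bounds]
        x, v = local_descent(f, x0)
        if v < best_val:
            best_x, best_val = x, v
    return best_x, best_val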
Similar Resources
Hybrid Probabilistic Search Methods for Simulation Optimization
Discrete-event-simulation-based optimization is the process of finding the optimum design of a stochastic system when the performance measure(s) can only be estimated via simulation. Randomness in simulation outputs often challenges the correct selection of the optimum. We propose an algorithm that merges Ranking and Selection procedures with a large class of random search methods for continu...
Fuzzy Reliability Optimization Models for Redundant Systems
In this paper, a special class of redundancy optimization problem with fuzzy random variables is presented. In this model, fuzzy random lifetimes are considered as basic parameters and the Er-expected value of the system lifetime is used as a major measure of system performance. Then a redundancy optimization problem is formulated as a binary integer programming model. Furthermore, illustrative numerical ex...
Augmented Downhill Simplex a Modified Heuristic Optimization Method
The Augmented Downhill Simplex Method (ADSM), a heuristic combination of the Downhill Simplex Method (DSM) with a random search algorithm, is introduced here. DSM is an interpretable nonlinear local optimization method; however, as a local exploitation algorithm, it can be trapped in a local minimum. In contrast, random search is a global exploration method, but is less efficient. Here, rand...
DISCRETE SIZE AND DISCRETE-CONTINUOUS CONFIGURATION OPTIMIZATION METHODS FOR TRUSS STRUCTURES USING THE HARMONY SEARCH ALGORITHM
Many methods have been developed for structural size and configuration optimization in which cross-sectional areas are usually assumed to be continuous. In most practical structural engineering design problems, however, the design variables are discrete. This paper proposes two efficient structural optimization methods based on the harmony search (HS) heuristic algorithm that treat both discret...
Optimization of Random Sample Size in Progressively Type II Censoring based on Cost Criterion
So far censored samples have been studied by many researchers. One of the most important methods of censoring is progressively type II censoring. An interesting issue in the discussion of censoring is determination of the optimal sample size. Various factors are influential in determining the appropriate sample size, the most important of which is the sampling cost criterion. In this paper, ass...
Randomized Similar Triangles Method: A Unifying Framework for Accelerated Randomized Optimization Methods (Coordinate Descent, Directional Search, Derivative-Free Method)
In this paper, we consider smooth convex optimization problems with simple constraints and inexactness in the oracle information, such as the value, partial derivatives, or directional derivatives of the objective function. We introduce a unifying framework, which allows us to construct different types of accelerated randomized methods for such problems and to prove convergence rate theorems for them. We focus on a...
Publication date: 2005